nonmon.con[f84,jmc] Non-monotonic reasoning and controversy
Recent developments in artificial intelligence research offer
hope of a better understanding of controversies on political and social
issues. This essay expresses some views on the nature of such
controversy and on why intelligent people often don't agree with each
other, and some long range hopes for improving the situation, both
through improved theoretical understanding and eventually with the
help of computers.
On many scientific topics, sufficiently intelligent people
who carefully study the topic can reach agreement. There seems to
be no level of intelligence yet achieved by human beings that
makes agreement likely on many political and social issues, although
some opinions do seem to be ruled out by sufficient intelligence
and study.
There are many reasons for this, and perhaps most of them
are emotional, psychological and sociological, but I want to concentrate
on certain intellectual reasons, because they offer a chance of
applying the new ideas.
The new idea in AI is the formalization of {\it non-monotonic
reasoning}. Ordinary logical reasoning is monotonic in the sense
that if a conclusion follows from certain premises, then it still
follows if the set of premises is enlarged without dropping any of
the original premises. (The term {\it monotonic} is used in mathematics
for a function that increases when its argument increases.) As
we shall see, common sense reasoning includes both monotonic and
non-monotonic steps, but when people think about reasoning, they mostly
think about the monotonic steps and suppose that the steps that
seem to be non-monotonic really are monotonic but involve premises
that haven't been mentioned.
Non-monotonic reasoning draws conclusions that might be taken
back if more facts are ``taken into account'', i.e. included among
the premises. Therefore, non-monotonic reasoning is conjectural
and one might hope to avoid it, because its results aren't certain.
We shall see that it can't be avoided.
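To put the distinction in symbols (the standard turnstile notation
for logical consequence, not notation used elsewhere in this essay):
monotonicity is the property
$$\hbox{if } A \vdash p \hbox{ and } A \subseteq B, \hbox{ then } B \vdash p,$$
i.e. a conclusion drawn from a set of premises $A$ still follows from
any larger set $B$. A non-monotonic inference relation lacks this
property: it may sanction $p$ on the basis of $A$ and withdraw $p$
when $A$ is enlarged to $B$.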
Here's an example of good non-monotonic reasoning that results
in conclusions that may have to be taken back when more facts are
taken into account.
Suppose one regards it as extremely important to reduce the
ionizing radiation in the human environment. One knows that the
use of nuclear power involves some inevitable release of ionizing
radiation and the possible release of more. Let it be granted that
how much radiation is likely to be released and its likely effects
are controversial. One can then reason that the release of radiation
can be minimized by reducing or eliminating the use of nuclear energy.
One way to reduce nuclear energy is to reduce the use of all energy,
and one way to do that is to use very well insulated houses that
exchange energy with the outside very slowly. Therefore, we conclude
that well insulated houses reduce the level of human exposure to
ionizing radiation.
In my view this conclusion is correctly reasoned, given the
collection of {\it facts that were taken into account}. However,
it turns out that some very well insulated houses in Sweden,
a country quick to act on advanced social ideas, have internal
levels of ionizing radiation that would be illegal in an American
uranium mine.
The fact that was not taken into account in the previous reasoning
is that granite and other rocks contain uranium that is always
decaying radioactively. One of the decay products is the gas radon,
which decays again in a few days into a solid whose further decay
products are all solid. In poorly insulated houses, any radon produced
by decay of the building materials diffuses through cracks into the
atmosphere, is diluted and decays in a few days. However, in a well
insulated house, the radon stays in and reaches concentrations deemed
to be undesirable.
This fact about radon and the radiation level of well insulated
houses might have been figured out from first principles by someone
familiar with the known physics and the engineering of well insulated
houses. However, it actually became known in a different way, which itself
is instructive. In the 1950s, waste from uranium mines was used in
the form of gravel for making concrete used in building houses in
Colorado. In the late 1960s concern rose about radiation and it was
decided that the foundations of these houses should be sealed so that
radon couldn't escape. In order to monitor the success of the sealing
process, it was decided to establish what the radon level was in
houses for which uranium mill tailings had not been used. It was then
discovered that the level was quite variable and that it depended on
how well the house was insulated. The point of this story is that
an important fact about radiation levels was discovered only
accidentally even though it might have been conjectured from first
principles.
In my view the people who decided that good insulation would
reduce radiation cannot be criticized for not knowing about the
radon problem. (Some of them might be criticized for exaggerating
the importance of radiation or for refusing to take the radon
problem into account once it was pointed out, but this is another
matter.) There is no way to exclude the possibility that there
is a phenomenon no one has heard about that affects the matter
being considered. Demanding absolute assurance of one's conclusions
is a recipe for drawing no conclusions at all.
However, many, perhaps most, persistent controversies
involve major differences in what facts the parties take into account,
even when each side explicitly lists the facts it regards as relevant.
Consider the following exchange of views.
******
So far the discussion in this essay has been entirely informal,
but now I want to mention the recent studies and results in artificial
intelligence and argue their relevance to understanding the phenomenon
of non-monotonic reasoning in controversy.
Artificial intelligence is the study of how to make computer
programs that behave intelligently. Already in the 1950s it was
argued that the key problem was to understand common sense reasoning
and to describe the common sense world in a way that would permit
programs to reason about it. For many years the only reasoning
considered was monotonic, but in the early 1970s programs that used
default values were introduced. A default value for a variable is
a value it is assumed to have unless there is information that it
has some other value. In the middle 1970s more general forms of
non-monotonic reasoning were formalized. This discussion is based
on a method called {\it circumscription}, which I first proposed in
1977 and elaborated in papers in 1980 and 1984.
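Such a default has a simple logical rendering in the style of the
circumscription literature (an illustrative formula, not one quoted
from the papers just mentioned): introduce a predicate $ab$, read
``abnormal'', and write
$$\neg ab(x) \supset \hbox{\it value}(x) = v_0,$$
i.e. unless $x$ is abnormal in the relevant respect, its value is the
default $v_0$. Minimizing $ab$, which is what circumscription does,
amounts to assuming abnormality only where the known facts require it.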
Informally circumscription can be described as making the
conjecture that the entities of a certain kind that exist are just those
that have to exist given the facts that are being taken into account. In
the radiation example, circumscription of a suitably formalized version of
the original set of facts would lead to the conclusion that the only
sources of radiation are those known, i.e. those associated with nuclear
energy. Jumping to conclusions in this way, perhaps after some search
for additional sources, is necessary in order to reach any conclusion.
Thus there could always be some mysterious new source of radiation,
never before experienced, that would be excited by exactly
the measures one proposes to take.
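For readers who want the formal version, the idea can be sketched as
follows (a compressed rendering, not a substitute for the cited
papers): circumscribing a predicate $P$ in a sentence $A(P)$ asserts
that $P$ is minimal among the predicates satisfying $A$:
$$A(P)\;\wedge\;\forall P'.\,\bigl[\bigl(A(P') \wedge \forall x.(P'(x)
\supset P(x))\bigr) \supset \forall x.(P(x) \supset P'(x))\bigr].$$
In the radiation example $P(x)$ would say ``$x$ is a source of ionizing
radiation'', and circumscribing $P$ amounts to conjecturing that the
only sources are those the facts taken into account force to exist.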
1. Understanding that we must always take a limited collection
of facts into account suggests the following.